Parametric Spatial Audio Effects
Parametric spatial audio coding methods aim to efficiently represent the spatial information of recordings with psychoacoustically relevant parameters. This study presents how these parameters can be manipulated in various ways to achieve a series of spatial audio effects that modify the spatial distribution of a captured or synthesised sound scene, or alter the relation of its diffuse and directional content. Furthermore, it is discussed how the same representation can be used for the spatial synthesis of complex sound sources and scenes. Finally, it is argued that the parametric description provides an efficient and natural way to design spatial effects.
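As a concrete illustration of this kind of parameter-domain manipulation, the sketch below assumes a DirAC-style representation in which each time-frequency tile carries a direction-of-arrival estimate and a diffuseness value. The parameter layout, function names, and the two example effects (scene rotation and widening) are illustrative assumptions, not the paper's specific formulation.

```python
import numpy as np

# Assumed representation: per time-frequency tile, an azimuth (radians) and a
# diffuseness value in [0, 1]. Effects are realised by editing these parameters
# before the scene is re-rendered.

def rotate_scene(azimuth, angle):
    """Rotate the whole sound scene by offsetting the per-tile DoA estimates."""
    return np.mod(azimuth + angle + np.pi, 2 * np.pi) - np.pi

def widen_scene(diffuseness, amount):
    """Push energy towards the diffuse stream to increase perceived envelopment."""
    return np.clip(diffuseness + amount * (1.0 - diffuseness), 0.0, 1.0)

# Example: 100 time frames x 64 frequency bands of (placeholder) parameters
azi = np.random.uniform(-np.pi, np.pi, size=(100, 64))
dif = np.random.uniform(0.0, 1.0, size=(100, 64))

azi_fx = rotate_scene(azi, np.deg2rad(90.0))   # rotate the scene by 90 degrees
dif_fx = widen_scene(dif, 0.5)                 # make the scene sound wider
```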
Velvet Noise Decorrelator
Decorrelation of audio signals is an important process in the spatial reproduction of sounds. For instance, a mono signal that is spread over multiple loudspeakers should be decorrelated in each channel to avoid undesirable comb-filtering artifacts. Decorrelating the signal is itself a compromise, aiming to reduce the correlation as much as possible while minimizing both the sound coloration and the computational cost. A popular decorrelation method convolves the sound signal with a short sequence of exponentially decaying white noise; this, however, requires the FFT for fast convolution and may introduce latency. Here we propose a decorrelator based on a sparse random sequence called velvet noise, which achieves comparable results without latency and at a lower computational cost. A segmented temporal decay envelope can also be implemented for further optimization. Using the proposed method, we found that a decorrelation filter with perceptual attributes similar to those of white noise could be implemented using 87% fewer operations. Informal listening tests suggest that the resulting decorrelation filter performs comparably to an equivalent white-noise filter.
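A minimal sketch of the idea follows, assuming one impulse per grid period of the velvet-noise sequence and a simple exponential decay envelope; the filter length, impulse density, and decay values are illustrative rather than the paper's exact settings. Because most taps are zero, the convolution can in practice be replaced by a handful of delayed, sign-flipped additions, which is where the operation savings come from.

```python
import numpy as np

def velvet_noise_filter(length=1024, density=1000, fs=44100, decay_db=60, seed=0):
    """Generate a sparse velvet-noise decorrelation filter (illustrative parameters).

    One impulse with random sign and random position is placed in each grid
    period, and an exponentially decaying envelope shapes the impulse gains.
    """
    rng = np.random.default_rng(seed)
    grid = fs / density                        # average spacing between impulses, in samples
    h = np.zeros(length)
    m = 0
    while (m + 1) * grid <= length:
        k = int(m * grid + rng.uniform(0.0, grid))           # random position in the m-th period
        sign = rng.choice([-1.0, 1.0])
        envelope = 10.0 ** (-decay_db / 20.0 * k / length)   # decay_db drop over the filter length
        h[k] = sign * envelope
        m += 1
    return h / np.linalg.norm(h)               # normalize filter energy

# Decorrelate a mono test signal; dense convolution is shown for brevity, but
# the sparsity of h is what allows an FFT-free, low-cost implementation.
fs = 44100
x = np.random.default_rng(1).standard_normal(fs)
y = np.convolve(x, velvet_noise_filter(fs=fs))[:len(x)]
```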
Spherical Decomposition of Arbitrary Scattering Geometries for Virtual Acoustic Environments
A method is proposed to encode the acoustic scattering of objects for virtual acoustic applications through a multiple-input, multiple-output framework. The scattering is encoded as a matrix in the spherical harmonic domain, which can be re-used and manipulated (rotated, scaled, and translated) to synthesize various sound scenes. The proposed method is applied and validated using Boundary Element Method simulations, which show close agreement between the reference and the synthesized results. The method is compatible with existing frameworks such as Ambisonics and image source methods.
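The following toy example, built on placeholder data, is meant only to illustrate the MIMO structure of such an encoding: the object's scattering appears as a matrix acting on incident-field spherical harmonic coefficients, and re-orienting the object corresponds to conjugating that matrix with a spherical harmonic rotation matrix. The order, the matrix contents, and the identity "rotation" are assumptions; in the paper the scattering matrix would come from Boundary Element Method simulations.

```python
import numpy as np

# Placeholder sketch of the scattering matrix idea in the spherical harmonic
# (SH) domain: T maps incident-field SH coefficients to scattered-field ones.

order = 3
n_sh = (order + 1) ** 2                       # number of SH channels up to the given order

T = np.random.randn(n_sh, n_sh)               # stand-in scattering matrix (BEM-derived in the paper)
b_incident = np.random.randn(n_sh)            # stand-in incident-field SH coefficients

b_scattered = T @ b_incident                  # scattered field in the SH domain

# Re-using the encoded object in a rotated orientation: R would be a real SH
# rotation matrix; the identity is used here purely as a placeholder.
R = np.eye(n_sh)
T_rotated = R @ T @ R.T
```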
Parametric Spatial Audio Effects Based on the Multi-Directional Decomposition of Ambisonic Sound Scenes
Decomposing a sound-field into its individual components and their respective parameters can represent a convenient first step towards offering the user an intuitive means of controlling spatial audio effects and sound-field modification tools. The majority of such tools available today, however, are instead limited to linear combinations of signals or employ a basic single-source parametric model. Therefore, the purpose of this paper is to present a parametric framework that seeks to overcome these limitations by first dividing the sound-field into its multi-source and ambient components based on estimated spatial parameters. It is then demonstrated that, by manipulating the spatial parameters prior to reproducing the scene, a number of sound-field modifications and spatial audio effects may be realised, including directional warping, listener translation, sound source tracking, spatial editing workflows, and spatial side-chaining. Many of the described effects have also been implemented as real-time audio plug-ins in order to demonstrate how a user may interact with such tools in practice.
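As a hedged illustration of one of the listed effects, the sketch below applies a simple directional-warping curve to the azimuths of already-extracted source components; the warping function, its parameters, and the assumption that the ambient stream is left untouched are choices made for this example rather than the paper's exact formulation.

```python
import numpy as np

# Illustrative directional warping of per-source azimuth estimates, applied
# after a (hypothetical) multi-source plus ambient decomposition and before
# the scene is re-encoded or reproduced.

def warp_azimuth(azimuth, focus=0.0, strength=0.5):
    """Pull source directions towards a focus angle by a given strength in [0, 1]."""
    diff = np.mod(azimuth - focus + np.pi, 2 * np.pi) - np.pi   # wrapped angular difference
    return np.mod(focus + (1.0 - strength) * diff + np.pi, 2 * np.pi) - np.pi

# Three detected sources at -90, 30, and 120 degrees, warped towards the front:
doas = np.deg2rad([-90.0, 30.0, 120.0])
warped = warp_azimuth(doas, focus=0.0, strength=0.5)
print(np.rad2deg(warped))                     # directions squeezed towards 0 degrees
```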